
    e3nn: Euclidean Neural Networks

    We present e3nn, a generalized framework for creating E(3)-equivariant trainable functions, also known as Euclidean neural networks. e3nn naturally operates on geometry and geometric tensors that describe systems in 3D and transform predictably under a change of coordinate system. At the core of e3nn are equivariant operations, such as the TensorProduct class and the spherical harmonics functions, which can be composed to create more complex modules such as convolutions and attention mechanisms. These core operations can be used to efficiently articulate Tensor Field Networks, 3D Steerable CNNs, Clebsch-Gordan Networks, SE(3) Transformers, and other E(3)-equivariant networks.
    Comment: draft
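
    The operations described above compose directly in code. A minimal, hedged sketch of the pattern (assuming the e3nn Python package and PyTorch; the irreps choices are illustrative, not taken from the paper):

```python
# Minimal sketch using e3nn's o3 module: a learnable equivariant tensor
# product combined with spherical harmonics of 3D positions, the kind of
# building block that convolutions and attention are composed from.
import torch
from e3nn import o3

irreps_in = o3.Irreps("8x0e + 4x1o")               # scalar + vector features
irreps_sh = o3.Irreps.spherical_harmonics(lmax=2)  # 1x0e + 1x1o + 1x2e
irreps_out = o3.Irreps("8x0e + 4x1o")

# Learnable equivariant bilinear layer.
tp = o3.FullyConnectedTensorProduct(irreps_in, irreps_sh, irreps_out)

pos = torch.randn(10, 3)                           # 3D positions
feats = irreps_in.randn(10, -1)                    # features transforming as irreps_in
sh = o3.spherical_harmonics(irreps_sh, pos, normalize=True)
out = tp(feats, sh)                                # output transforms as irreps_out
```

    Rotating pos and transforming feats by the matching representation rotates out the same way; that is the equivariance guarantee the framework provides.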

    Quantifying chemical short-range order in metallic alloys

    Metallic alloys often form phases - known as solid solutions - in which chemical elements are spread out on the same crystal lattice in an almost random manner. The tendency of certain chemical motifs to be more common than others is known as chemical short-range order (SRO), and it has received substantial consideration in alloys with multiple chemical elements present in large concentrations due to their extreme configurational complexity (e.g., high-entropy alloys). Short-range order renders solid solutions "slightly less random than completely random", which is a physically intuitive picture but not easily quantifiable due to the sheer number of possible chemical motifs and their subtle spatial distribution on the lattice. Here we present a multiscale method to predict and quantify the SRO state of an alloy with atomic resolution, incorporating machine learning techniques to bridge the gap between electronic-structure calculations and the characteristic length scale of SRO. The result is an approach capable of predicting the SRO length scale in agreement with experimental measurements while comprehensively correlating SRO with fundamental quantities such as local lattice distortions. This work advances the quantitative understanding of solid-solution phases, paving the way for the rigorous incorporation of SRO into predictive mechanical and thermodynamic models.
    Comment: 8 pages, 4 figures
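
    The paper's multiscale machine-learning method is not reproduced here, but the classical Warren-Cowley parameter gives a concrete sense of how SRO is quantified on a lattice; a minimal sketch (function name and data layout are illustrative assumptions):

```python
import numpy as np

def warren_cowley_alpha(species, neighbors, a, b):
    """Warren-Cowley short-range order parameter for the species pair (a, b).

    species:   (N,) array giving the chemical species on each lattice site
    neighbors: (N, M) array of the M nearest-neighbor site indices per site
    Returns 1 - p(b | neighbor of an a-site) / c_b: zero for an ideally
    random solid solution, negative when a-b pairs are favored, positive
    when they are avoided.
    """
    c_b = np.mean(species == b)                   # overall concentration of b
    shells = neighbors[species == a]              # neighbor shells of a-sites
    p_b_given_a = np.mean(species[shells] == b)   # conditional pair probability
    return 1.0 - p_b_given_a / c_b
```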

    Finding symmetry breaking order parameters with Euclidean neural networks

    Curie's principle states that “when effects show certain asymmetry, this asymmetry must be found in the causes that gave rise to them.” We demonstrate that symmetry-equivariant neural networks uphold Curie's principle and can be used to articulate many symmetry-relevant scientific questions as simple optimization problems. We prove these properties mathematically and demonstrate them numerically by training a Euclidean symmetry-equivariant neural network to learn the symmetry-breaking input needed to deform a square into a rectangle and to generate octahedra tilting patterns in perovskites.
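
    As a toy illustration of the optimization problem described above (not the paper's actual network): fix a deformation map that is equivariant by construction, treat a symmetric rank-2 order parameter as the learnable input, and fit the square-to-rectangle target by gradient descent.

```python
import torch

def equivariant_deform(points, E):
    # points: (N, 3); E: (3, 3) symmetric, strain-like order parameter.
    # Equivariant by construction:
    #   deform(x @ R.T, R @ E @ R.T) == deform(x, E) @ R.T
    return points + points @ E

square = torch.tensor([[1., 1., 0.], [-1., 1., 0.],
                       [-1., -1., 0.], [1., -1., 0.]])
rectangle = square * torch.tensor([2., 1., 1.])  # target: stretched along x

raw = torch.zeros(3, 3, requires_grad=True)      # learnable symmetry-breaking input
opt = torch.optim.Adam([raw], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    E = 0.5 * (raw + raw.T)                      # keep the order parameter symmetric
    loss = ((equivariant_deform(square, E) - rectangle) ** 2).mean()
    loss.backward()
    opt.step()
# E converges to roughly diag(1, 0, 0): the asymmetry of the output shows up
# in the learned input, exactly as Curie's principle demands.
```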

    Learning Integrable Dynamics with Action-Angle Networks

    Machine learning has become increasingly popular for efficiently modelling the dynamics of complex physical systems, demonstrating a capability to learn effective models of dynamics that ignore redundant degrees of freedom. Learned simulators typically predict the evolution of the system in a step-by-step manner with numerical integration techniques. However, such models often suffer from instability over long roll-outs due to the accumulation of both estimation and integration error at each prediction step. Here we propose an alternative construction for learned physical simulators, inspired by the concept of action-angle coordinates from classical mechanics for describing integrable systems. We introduce Action-Angle Networks, which learn a nonlinear transformation from input coordinates to the action-angle space, where evolution of the system is linear. Unlike traditional learned simulators, Action-Angle Networks do not employ any higher-order numerical integration methods, making them extremely efficient at modelling the dynamics of integrable physical systems.
    Comment: Accepted at the Machine Learning and the Physical Sciences workshop at NeurIPS 2022
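
    A hedged sketch of the construction described above (module names and sizes are illustrative; the published model uses more careful, invertible transformations):

```python
import torch
import torch.nn as nn

class ActionAngleNet(nn.Module):
    """Sketch: encode a phase-space state into action-angle variables,
    evolve the angles linearly, and decode back. No numerical integrator."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 2 * dim))
        self.decode = nn.Sequential(nn.Linear(2 * dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 2 * dim))
        self.omega = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))  # frequencies omega(I)

    def forward(self, state, t):
        # state: (batch, 2*dim) positions and momenta; t: (batch, 1) time offsets
        action, angle = self.encode(state).chunk(2, dim=-1)
        angle = angle + self.omega(action) * t  # linear: theta(t) = theta0 + omega(I)*t
        return self.decode(torch.cat([action, angle], dim=-1))
```

    Because evolution is a single linear step in angle space, an arbitrary time offset costs one forward pass rather than many accumulating integration steps.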

    A General Framework for Equivariant Neural Networks on Reductive Lie Groups

    Reductive Lie groups, such as the orthogonal groups, the Lorentz group, and the unitary groups, play essential roles across scientific fields as diverse as high-energy physics, quantum mechanics, quantum chromodynamics, molecular dynamics, computer vision, and imaging. In this paper, we present a general equivariant neural network architecture capable of respecting the symmetries of the finite-dimensional representations of any reductive Lie group G. Our approach generalizes the successful ACE and MACE architectures for atomistic point clouds to any data equivariant to a reductive Lie group action. We also introduce the lie-nn software library, which provides all the necessary tools to develop and implement such general G-equivariant neural networks. It implements routines for the reduction of generic tensor products of representations into irreducible representations, making it easy to apply our architecture to a wide range of problems and groups. The generality and performance of our approach are demonstrated by applying it to the tasks of top quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
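
    The tensor-product reduction that lie-nn automates is group-specific; for the familiar SO(3) case it is the classical Clebsch-Gordan series (shown here only as the best-known instance):

```latex
% Clebsch-Gordan series: the SO(3) instance of tensor-product reduction
D^{(l_1)} \otimes D^{(l_2)} \cong \bigoplus_{l=|l_1 - l_2|}^{l_1 + l_2} D^{(l)}
```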

    Simulations of Pion Production in the Daedalus Target

    DAEδALUS, the Decay At-rest Experiment for δCP at a Laboratory for Underground Science, will look for evidence of CP violation in the neutrino sector, an ingredient in theories that seek to explain the matter/antimatter asymmetry of our universe. It will make a precision measurement of the oscillations of muon antineutrinos to electron antineutrinos using multiple neutrino sources created by low-cost compact cyclotrons. The experiment utilizes decay-at-rest neutrino beams produced by 800 MeV protons impinging on a beam target of graphite and copper. Two well-established Monte Carlo codes, MARS and GEANT4, have been used to optimise the design and the performance of the target. A study of the results obtained with these two codes is presented in this paper.

    Sign and Basis Invariant Networks for Spectral Graph Representation Learning

    Many machine learning tasks involve processing eigenvectors derived from data. Especially valuable are Laplacian eigenvectors, which capture useful structural information about graphs and other geometric objects. However, ambiguities arise when computing eigenvectors: for each eigenvector v, the sign-flipped −v is also an eigenvector. More generally, higher-dimensional eigenspaces contain infinitely many choices of basis eigenvectors. These ambiguities make it a challenge to process eigenvectors and eigenspaces in a consistent way. In this work we introduce SignNet and BasisNet -- new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner. Our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the proper invariances. They are also theoretically strong for graph representation learning -- they can approximate any spectral graph convolution, can compute spectral invariants that go beyond message-passing neural networks, and can provably simulate previously proposed graph positional encodings. Experiments show the strength of our networks for molecular graph regression, learning expressive graph representations, and learning implicit neural representations on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet.
    Comment: 35 pages
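
    The sign-invariance idea at the heart of SignNet is compact enough to sketch (a minimal illustration of the symmetrization trick, not the full architecture; layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class SignInvariantEncoder(nn.Module):
    """Map each eigenvector v through phi(v) + phi(-v), which is invariant
    to the arbitrary sign of every computed eigenvector."""
    def __init__(self, n_nodes, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_nodes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, eigvecs):
        # eigvecs: (n_nodes, k) matrix with Laplacian eigenvectors as columns
        v = eigvecs.T                        # one row per eigenvector
        return self.phi(v) + self.phi(-v)    # (k, hidden), sign-invariant
```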